
    Cerebellar Morphometry and Cognition in the Context of Chronic Alcohol Consumption and Cigarette Smoking.

    Background: Cerebellar atrophy (especially involving the superior-anterior cerebellar vermis) is among the most salient and clinically significant effects of chronic hazardous alcohol consumption on brain structure. Smaller cerebellar volumes are also associated with chronic cigarette smoking. The present study investigated effects of both chronic alcohol consumption and cigarette smoking on cerebellar structure and its relation to performance on select cognitive/behavioral tasks.
    Methods: Using T1-weighted Magnetic Resonance Images (MRIs), the Cerebellar Analysis Tool Kit segmented the cerebellum into bilateral hemispheres and 3 vermis parcels from 4 participant groups: smoking (s) and nonsmoking (ns) abstinent alcohol-dependent treatment seekers (ALC) and controls (CON) (i.e., sALC, nsALC, sCON, and nsCON). Cognitive and behavioral data were also obtained.
    Results: We found detrimental effects of chronic drinking on all cerebellar structural measures in ALC participants, with the largest reductions seen in vermis areas. Furthermore, both smoking groups had smaller volumes of cerebellar hemispheres, but not vermis areas, compared to their nonsmoking counterparts. In exploratory analyses, smaller cerebellar volumes were related to lower measures of intelligence. In sCON, but not sALC, greater smoking severity was related to smaller cerebellar volume and smaller superior-anterior vermis area. In sALC, greater abstinence duration was associated with larger cerebellar and superior-anterior vermis areas, suggesting some recovery with abstinence.
    Conclusions: Our results show that both smoking and alcohol status are associated with smaller cerebellar structural measurements, with vermal areas more vulnerable to chronic alcohol consumption and less affected by chronic smoking. These morphometric cerebellar deficits were also associated with lower intelligence and related to duration of abstinence in sALC only.

    What’s in a Smile? Initial results of multilevel principal components analysis of facial shape and image texture

    Multilevel principal components analysis (mPCA) has previously been shown to provide a simple and straightforward method of forming point distribution models that can be used in (active) shape models. Here we extend the mPCA approach to model image texture as well as shape. As a test case, we consider a set of (2D frontal) facial images from a group of 80 Finnish subjects (34 male; 46 female) with two different facial expressions (smiling and neutral) per subject. Shape (in terms of landmark points) and image texture are considered separately in this initial analysis. Three-level models are constructed that contain levels for biological sex, “within-subject” variation (i.e., facial expression), and “between-subject” variation (i.e., all other sources of variation). By considering eigenvalues, we find that the order of importance as sources of variation for facial shape is: facial expression (47.5%), between-subject variations (45.1%), and then biological sex (7.4%). By contrast, the order for image texture is: between-subject variations (55.5%), facial expression (37.1%), and then biological sex (7.4%). The major modes for the facial expression level of the mPCA models clearly reflect increased mouth size and increased prominence of the cheeks during smiling, for both shape and texture. Even subtle effects, such as changes to eye and nose shape during smiling, are seen clearly. The major mode for the biological sex level of the mPCA models similarly relates clearly to changes between male and female. Model fits yield “scores” for each principal component that show strong clustering for both shape and texture by biological sex and facial expression at the appropriate levels of the model. We conclude that mPCA correctly decomposes sources of variation due to biological sex and facial expression (etc.) and that it provides a reliable method of forming models of both shape and image texture.
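
    As an illustration of the three-level decomposition described above, the Python sketch below centres the data at each level (sex-group means, subject means, and per-expression deviations) and eigen-decomposes each level's covariance; the per-level eigenvalue sums correspond to the variance shares quoted in the abstract. The data layout, variable names, and the specific level-wise centring scheme are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of three-level mPCA on flattened landmark vectors.
# Assumptions (not from the paper): X has shape
# (n_subjects, n_expressions, n_features) and sex[s] is coded 0/1.
import numpy as np

def level_pca(deviations):
    """Eigen-decompose the covariance of one level's deviation vectors."""
    cov = np.cov(deviations, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]          # largest modes first
    return evals[order], evecs[:, order]

def mpca(X, sex):
    X = np.asarray(X, dtype=float)
    sex = np.asarray(sex)
    grand_mean = X.mean(axis=(0, 1))
    sex_means = {g: X[sex == g].mean(axis=(0, 1)) for g in (0, 1)}

    # Level 1: biological sex (sex-group mean minus grand mean)
    d_sex = np.stack([sex_means[g] - grand_mean for g in (0, 1)])

    # Level 2: between-subject (subject mean minus its sex-group mean)
    subj_means = X.mean(axis=1)
    d_subj = subj_means - np.stack([sex_means[g] for g in sex])

    # Level 3: within-subject, i.e. facial expression
    d_expr = (X - subj_means[:, None, :]).reshape(-1, X.shape[-1])

    return {name: level_pca(d) for name, d in
            (("sex", d_sex), ("between-subject", d_subj), ("expression", d_expr))}
```

    Summing the eigenvalues returned for each level and normalising by the total gives per-level variance proportions of the kind reported above (e.g., 47.5% / 45.1% / 7.4% for shape).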

    Multiscale 3D Shape Analysis using Spherical Wavelets

    ©2005 Springer. The original publication is available at www.springerlink.com: http://dx.doi.org/10.1007/11566489_57 (DOI: 10.1007/11566489_57). Shape priors attempt to represent biological variations within a population. When variations are global, Principal Component Analysis (PCA) can be used to learn major modes of variation, even from a limited training set. However, when significant local variations exist, PCA typically cannot represent such variations from a small training set. To address this issue, we present a novel algorithm that learns shape variations from data at multiple scales and locations using spherical wavelets and spectral graph partitioning. Our results show that when the training set is small, our algorithm significantly improves the approximation of shapes in a testing set over PCA, which tends to oversmooth data.
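
    As a rough illustration of the multiscale idea, the sketch below fits an independent PCA prior to each band of wavelet coefficients rather than one global PCA over full shape vectors. The banding is taken as given here (the paper derives it by spectral graph partitioning over spherical wavelet coefficients); the array shapes, function names, and variance-retention rule are assumptions for illustration only.

```python
# A minimal sketch: per-band PCA priors over shape coefficients.
# `coeffs` holds training shapes already transformed to spherical
# wavelet coefficients (the transform itself is not reproduced here).
import numpy as np

def banded_pca(coeffs, bands, var_keep=0.98):
    """Fit one PCA per band of wavelet coefficients.

    coeffs : (n_shapes, n_coeffs) training coefficient vectors
    bands  : list of index arrays, one per scale/location band
    """
    priors = []
    for idx in bands:
        sub = coeffs[:, idx]
        mean = sub.mean(axis=0)
        u, s, vt = np.linalg.svd(sub - mean, full_matrices=False)
        var = s**2 / np.sum(s**2)
        k = int(np.searchsorted(np.cumsum(var), var_keep)) + 1
        priors.append({"index": idx, "mean": mean, "basis": vt[:k]})
    return priors

def project(priors, new_coeffs):
    """Approximate an unseen shape's coefficients band by band."""
    approx = np.empty_like(new_coeffs)
    for p in priors:
        centred = new_coeffs[p["index"]] - p["mean"]
        approx[p["index"]] = p["mean"] + p["basis"].T @ (p["basis"] @ centred)
    return approx
```

    Because each band has far fewer dimensions than the full shape vector, a small training set can still constrain localized variation, which is the intuition behind the reported improvement over global PCA.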

    A CNN cascade for landmark guided semantic part segmentation

    This paper proposes a CNN cascade for semantic part segmentation guided by pose-specific information encoded in terms of a set of landmarks (or keypoints). There is a large amount of prior work on each of these tasks separately, yet, to the best of our knowledge, this is the first time in the literature that the interplay between pose estimation and semantic part segmentation has been investigated. To address this limitation of prior work, we propose a CNN cascade of tasks that first performs landmark localisation and then uses this information as input for guiding semantic part segmentation. We applied our architecture to the problem of facial part segmentation and report a large performance improvement over the standard unguided network on the most challenging face datasets. Testing code and models will be published online at http://cs.nott.ac.uk/~psxasj/.
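
    A minimal PyTorch sketch of the cascade structure follows: a first network regresses one heatmap per landmark, and those heatmaps are concatenated with the input image as extra channels for a second, segmentation network. The plain convolutional stacks, channel counts, and landmark/part numbers are placeholders, not the paper's actual architecture.

```python
# A minimal sketch of landmark-guided segmentation as a two-stage cascade.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class LandmarkNet(nn.Module):
    def __init__(self, n_landmarks=68):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.head = nn.Conv2d(64, n_landmarks, 1)   # one heatmap per landmark
    def forward(self, img):
        return self.head(self.body(img))

class SegmentationNet(nn.Module):
    def __init__(self, n_landmarks=68, n_parts=7):
        super().__init__()
        self.body = nn.Sequential(conv_block(3 + n_landmarks, 64),
                                  conv_block(64, 64))
        self.head = nn.Conv2d(64, n_parts, 1)       # per-pixel part logits
    def forward(self, img, heatmaps):
        return self.head(self.body(torch.cat([img, heatmaps], dim=1)))

img = torch.randn(1, 3, 128, 128)
landmark_net, seg_net = LandmarkNet(), SegmentationNet()
heatmaps = landmark_net(img)             # stage 1: landmark localisation
part_logits = seg_net(img, heatmaps)     # stage 2: guided part segmentation
```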

    Fast algorithms for fitting active appearance models to unconstrained images

    Fitting algorithms for Active Appearance Models (AAMs) are usually considered to be either robust but slow or fast but less able to generalize well to unseen variations. In this paper, we look into AAM fitting algorithms and make the following orthogonal contributions: We present a simple “project-out” optimization framework that unifies and revises the most well-known optimization problems and solutions in AAMs. Based on this framework, we describe robust simultaneous AAM fitting algorithms whose complexity is not prohibitive for current systems. We then go one step further and propose a new approximate project-out AAM fitting algorithm, which we coin extended project-out inverse compositional (E-POIC). In contrast to current algorithms, E-POIC is both efficient and robust. Next, we describe a part-based AAM employing a translational motion model, which results in superior fitting and convergence properties. We also show that the proposed AAMs, when trained “in-the-wild” using SIFT descriptors, perform surprisingly well even for the case of unseen unconstrained images. Via a number of experiments on unconstrained human and animal face databases, we show that our combined contributions largely bridge the gap between exact and current approximate methods for AAM fitting and perform comparably with state-of-the-art face alignment algorithms.
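
    For context, the sketch below shows the classic project-out inverse-compositional step that E-POIC builds on: appearance variation is projected out of the warp Jacobian so that the expensive Gauss-Newton terms can be precomputed. Array shapes and names are illustrative assumptions; the paper's extended algorithm is not reproduced here.

```python
# A minimal numpy sketch of one project-out inverse-compositional step.
import numpy as np

def project_out(M, A):
    """Remove the span of the (orthonormal) appearance basis A from M."""
    return M - A @ (A.T @ M)

def poic_update(template, image_warped, J, A):
    """One project-out Gauss-Newton parameter increment.

    template     : (n_pixels,) mean appearance
    image_warped : (n_pixels,) image sampled at the current warp
    J            : (n_pixels, n_params) warp Jacobian at the template
    A            : (n_pixels, n_appearance) orthonormal appearance basis
    """
    J_po = project_out(J, A)        # independent of the current image,
    H = J_po.T @ J_po               # so both are precomputable offline
    r = image_warped - template     # error image
    # The increment is applied via the inverse-compositional warp update.
    return np.linalg.solve(H, J_po.T @ r)
```

    Because `J_po` and `H` do not depend on the current image, each iteration reduces to one image warp plus cheap linear algebra, which is what makes project-out schemes fast.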

    Towards Pose-Invariant 2D Face Classification for Surveillance

    A key problem for "face in the crowd" recognition from existing surveillance cameras in public spaces (such as mass transit centres) is the issue of pose mismatches between probe and gallery faces. In addition to accuracy, scalability is also important, necessarily limiting the complexity of face classification algorithms. In this paper we evaluate recent approaches to the recognition of faces at relatively large pose angles from a gallery of frontal images and propose novel adaptations as well as modifications. Specifically, we compare and contrast the accuracy, robustness and speed of an Active Appearance Model (AAM) based method (where realistic frontal faces are synthesized from non-frontal probe faces) against bag-of-features methods (which are local feature approaches based on block Discrete Cosine Transforms and Gaussian Mixture Models). We show a novel approach whereby the AAM-based technique is sped up by directly obtaining pose-robust features, allowing the omission of the computationally expensive and artefact-producing image synthesis step. Additionally, we adapt a histogram-based bag-of-features technique to face classification and contrast its properties with a previously proposed direct bag-of-features method. We also show that the two bag-of-features approaches can be considerably sped up, without a loss in classification accuracy, via an approximation of the exponential function. Experiments on the FERET and PIE databases suggest that the bag-of-features techniques generally attain better performance, with significantly lower computational loads. The histogram-based bag-of-features technique is capable of achieving an average recognition accuracy of 89% for pose angles of around 25 degrees.
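
    The exponential speed-up can be illustrated with a standard limit-based approximation computed by repeated squaring; whether this matches the paper's exact approximation is an assumption. The sketch applies it to diagonal-covariance GMM responsibilities, the inner loop of the bag-of-features methods; the constant log-determinant terms are assumed folded into the log weights.

```python
# A minimal sketch: approximate exp inside GMM likelihood evaluation.
import numpy as np

def fast_exp(x, n=8):
    """exp(x) ~= (1 + x/2**n)**(2**n), via n squarings.
    Accurate for moderate |x|; clamped so large negative inputs give 0."""
    y = np.maximum(1.0 + x / (1 << n), 0.0)
    for _ in range(n):
        y = y * y
    return y

def gmm_responsibilities(x, means, inv_vars, log_weights):
    """Responsibilities of a diagonal-covariance GMM for one descriptor x."""
    diff = x - means                                 # (K, D)
    mahal = np.sum(diff * diff * inv_vars, axis=1)   # (K,) squared distances
    scores = fast_exp(log_weights - 0.5 * mahal)     # unnormalised likelihoods
    return scores / scores.sum()
```

    Bit-manipulation schemes (e.g., Schraudolph-style tricks) are another common way to approximate the exponential; the repeated-squaring form is used here only because it is simple and portable.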